    Resource Estimation for Large Scale, Real-Time Image Analysis on Live Video Cameras Worldwide

    Thousands of public cameras live-stream an abundance of data to the Internet every day. If analyzed in real time by computer programs, these cameras could provide unprecedented utility as a global sensory tool. For example, if cameras capture the scene of a fire, a system running image analysis software on their footage in real time could be programmed to react appropriately (perhaps by calling firefighters). No such technology has been deployed at large scale because the sheer computing resources needed have yet to be determined. To help build computer systems powerful enough to achieve such lifesaving feats, we developed a model that estimates the computing resources required for an experiment of that magnitude. The team is creating an experiment to demonstrate the feasibility of analyzing real-time images at a large scale. More specifically, the experiment aims to retrieve and analyze one billion images in 24 hours. A preliminary study suggests that this goal is attainable. This experiment will study the accuracy and performance of state-of-the-art image analysis solutions and reveal directions for future improvement.
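    The scale of the stated goal can be made concrete with simple arithmetic. A minimal sketch, in which the 50 ms per-image analysis time is a purely hypothetical figure (the abstract does not report one):

```python
# Back-of-envelope estimate of the compute needed to analyze one billion
# images in 24 hours. The per-image analysis time is an illustrative
# assumption, not a figure from the study.

TARGET_IMAGES = 1_000_000_000
WINDOW_SECONDS = 24 * 60 * 60                   # 86,400 s in a day

required_throughput = TARGET_IMAGES / WINDOW_SECONDS   # images per second

# Assume (hypothetically) that one worker analyzes an image in 50 ms.
per_image_seconds = 0.050
per_worker_throughput = 1 / per_image_seconds          # 20 images/s

workers_needed = required_throughput / per_worker_throughput

print(f"required throughput: {required_throughput:,.0f} images/s")
print(f"workers needed at 20 images/s each: {workers_needed:,.0f}")
```

    Under these assumptions the system must sustain roughly 11,574 images per second, on the order of 580 such workers, which is why resource estimation precedes deployment.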

    Effects of Soy Peptide on Dendritic Cells

    Innate immunity is mediated by effector cells, including NK cells, dendritic cells (DCs), macrophages, and polymorphonuclear phagocytes, which can respond immediately after activation through receptors encoded by germ-line genes. Innate immune responses represent the first line of defense in immunosurveillance. Interventions that enhance the functions of innate immunity would be an important armamentarium for human health. We recently exploited a natural dietary soy peptide called lunasin to improve immune function. The hypothesis was that the lunasin peptide has stimulatory effects on immune cells. To test this hypothesis, human peripheral blood mononuclear cells (PBMCs) from healthy donors were stimulated with or without lunasin. We found that lunasin is capable of stimulating DCs to up-regulate chemokines (CCL2, CCL3, and CCL4), cytokines (TNFα and IFNα), and co-stimulatory molecules (CD80, CD86). In addition, lunasin-treated DCs can provide NK cells with the signals required for activation. Taken together, our results support the immunomodulatory activity of soy peptide on DCs, which leads to enhancement of innate immunity.

    Body-as-Subject in the Four-Hand Illusion

    In a recent study (Chen et al., 2018), we conducted a series of experiments that induced the “four-hand illusion”: using a head-mounted display (HMD), the participant adopted the experimenter's first-person perspective (1PP) as if it were his/her own 1PP. The participant saw four hands via the HMD: the experimenter's two hands from the adopted 1PP and the participant's own two hands from the adopted third-person perspective (3PP). In the active four-hand condition, the participant tapped his/her index fingers, and the experimenter imitated the movements. When all four hands acted synchronously and received synchronous tactile stimulation, many participants felt as if they owned two more hands. In this paper, we argue that this novel illusion has a philosophical implication. According to Merleau-Ponty (1945/1962) and Legrand (2010), one can experience one's own body or body-part either as-object or as-subject but cannot experience it as both simultaneously, i.e., these two experiences are mutually exclusive. Call this view the Experiential Exclusion Thesis. We contend that a key component of the four-hand illusion—the subjective experience of the 1PP-hands, which involved both a “kinesthetic sense of movement” and a “visual sense of movement” (the movement that the participant sees via the HMD)—provides an important counter-example to this thesis. We argue that it is possible for a healthy subject to experience the same body-part both as-subject and as-object simultaneously. Our goal is not to annihilate the distinction between body-as-object and body-as-subject, but to show that it is not as rigid as the phenomenologists suggest.

    Modular Neural Networks for Low-Power Image Classification on Embedded Devices

    Embedded devices are generally small, battery-powered computers with limited hardware resources. It is difficult to run deep neural networks (DNNs) on these devices, because DNNs perform millions of operations and consume significant amounts of energy. Prior research has shown that a considerable number of a DNN’s memory accesses and computations are redundant when performing tasks like image classification. To reduce this redundancy and thereby reduce the energy consumption of DNNs, we introduce the Modular Neural Network Tree architecture. Instead of using one large DNN as the classifier, this architecture uses multiple smaller DNNs (called modules) to progressively classify images into groups of categories based on a novel visual similarity metric. Once a group of categories is selected by a module, another module then continues to distinguish among the similar categories within the selected group. This process is repeated over multiple modules until we are left with a single category. The computation needed to distinguish dissimilar groups is avoided, thus reducing redundant operations, memory accesses, and energy. Experimental results on several image datasets show that our proposed solution reduces memory requirements by 50% to 99%, inference time by 55% to 95%, energy consumption by 52% to 94%, and the number of operations by 15% to 99% compared with existing DNN architectures, running on two different embedded systems: the Raspberry Pi 3 and the Raspberry Pi Zero.
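    The routing described above can be sketched as a tree of small classifiers: a root module selects a group, and a child module distinguishes categories within it. A minimal sketch in which the modules are stand-in callables and the grouping is hypothetical, not the paper's actual CNNs or similarity metric:

```python
# Tree-of-modules sketch: each node holds a small classifier that either
# routes the input to a child module (internal node) or emits a final
# category (leaf). The classifiers here are toy lambdas on numbers.

class ModuleNode:
    def __init__(self, classifier, children=None, labels=None):
        self.classifier = classifier      # maps input -> child/label index
        self.children = children or []    # sub-modules, one per group
        self.labels = labels or []        # final categories, if a leaf

    def predict(self, x):
        choice = self.classifier(x)
        if self.children:                 # internal node: descend
            return self.children[choice].predict(x)
        return self.labels[choice]        # leaf: final category

# Toy grouping: route by sign first, then by magnitude.
leaf_neg = ModuleNode(lambda x: 0 if x > -10 else 1,
                      labels=["small-neg", "large-neg"])
leaf_pos = ModuleNode(lambda x: 0 if x < 10 else 1,
                      labels=["small-pos", "large-pos"])
root = ModuleNode(lambda x: 0 if x < 0 else 1,
                  children=[leaf_neg, leaf_pos])

print(root.predict(-3))   # small-neg
print(root.predict(42))   # large-pos
```

    The saving comes from the same structure: an input routed to the "negative" module never pays for the computation that distinguishes the "positive" categories.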

    Camera Placement Meeting Restrictions of Computer Vision

    In the blooming era of smart edge devices, surveillance cameras have been deployed in many locations. Surveillance cameras are most useful when they are spaced out to maximize coverage of an area. However, deciding where to place cameras is an NP-hard problem, and researchers have proposed heuristic solutions. Existing work does not consider a significant restriction of computer vision: in order to track a moving object, the object must occupy enough pixels. The number of pixels depends on many factors (How far away is the object? What is the camera resolution? What is the focal length?). In this study we propose a camera placement method that not only identifies effective camera placement in arbitrary spaces, but can account for different camera types as well. Our strategy represents spaces as polygons, then uses a greedy algorithm to partition the polygons and determine the cameras' locations to provide the desired coverage. The solution also makes it possible to perform object tracking via overlapping camera placement. Our method is evaluated against complex shapes and real-world museum floor plans, achieving up to 82% coverage and 28% overlap.
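    The greedy flavor of such placement can be illustrated as a set-cover loop: repeatedly pick the candidate camera that covers the most still-uncovered space. A minimal sketch over grid cells with hypothetical candidate positions, which mirrors the greedy step but not the paper's polygon-partition details:

```python
# Greedy set-cover sketch for camera placement: the space is a set of grid
# cells, each candidate camera covers some cells, and we repeatedly pick
# the camera covering the most still-uncovered cells until a target
# coverage fraction is reached.

def greedy_place(cells, candidates, target_coverage=0.8):
    covered, chosen = set(), []
    while len(covered) / len(cells) < target_coverage:
        best = max(candidates, key=lambda c: len(candidates[c] - covered))
        gain = candidates[best] - covered
        if not gain:                      # no camera adds coverage: stop
            break
        covered |= gain
        chosen.append(best)
    return chosen, len(covered) / len(cells)

# Toy floor plan: 10 cells and three candidate positions (made-up data).
cells = set(range(10))
candidates = {
    "A": {0, 1, 2, 3, 4},
    "B": {4, 5, 6, 7},
    "C": {7, 8, 9},
}
placement, coverage = greedy_place(cells, candidates)
print(placement, coverage)   # ['A', 'B'] 0.8
```

    Visibility constraints (distance, resolution, focal length) would enter this sketch by shrinking each candidate's covered-cell set to only the cells where a tracked object would occupy enough pixels.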

    Tree-based Unidirectional Neural Networks for Low-Power Computer Vision

    This article describes the novel Tree-based Unidirectional Neural Network (TRUNK) architecture. This architecture improves computer vision efficiency by using a hierarchy of multiple shallow Convolutional Neural Networks (CNNs) instead of a single very deep CNN. We demonstrate this architecture's versatility in performing different computer vision tasks efficiently on embedded devices. Across various computer vision tasks, the TRUNK architecture consumes 65% less energy and requires 50% less memory than representative low-power CNN architectures, e.g., MobileNet v2, when deployed on the NVIDIA Jetson Nano.

    Directed Acyclic Graph-based Neural Networks for Tunable Low-Power Computer Vision

    Processing visual data on mobile devices has many applications, e.g., emergency response and tracking. State-of-the-art computer vision techniques rely on large Deep Neural Networks (DNNs) that are usually too power-hungry to be deployed on resource-constrained edge devices. Many techniques improve the efficiency of DNNs by compromising accuracy. However, the accuracy and efficiency of these techniques cannot be adapted for diverse edge applications with different hardware constraints and accuracy requirements. This paper demonstrates that a recent, efficient tree-based DNN architecture, called the hierarchical DNN, can be converted into a Directed Acyclic Graph-based (DAG) architecture to provide tunable accuracy-efficiency tradeoff options. We propose a systematic method that identifies the connections that must be added to the tree to convert it into a DAG and improve accuracy. We conduct experiments on popular edge devices and show that increasing the connectivity of the DAG improves the accuracy to within 1% of existing high-accuracy techniques. Our approach requires 93% less memory, 43% less energy, and 49% fewer operations than the high-accuracy techniques, thus providing more accuracy-efficiency configurations.
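    Why added connections can help is easy to see structurally: in a tree, each leaf category is reachable from exactly one parent, so a routing mistake at an internal node is unrecoverable; a cross edge gives a leaf a second parent. A minimal sketch with a made-up category hierarchy (the edge names are illustrative, not from the paper):

```python
# Tree vs. DAG reachability sketch: adding a cross edge makes a leaf
# category reachable from more than one internal node, so a sample
# misrouted at the first split can still reach the correct category.

from collections import defaultdict

edges = defaultdict(set)
# Original tree: root splits into two groups, leaves hang below.
edges["root"] = {"vehicles", "animals"}
edges["vehicles"] = {"car", "truck"}
edges["animals"] = {"dog", "horse"}

# Hypothetical confusion: "horse" images are often routed to "vehicles",
# so add a cross edge; "horse" now has two parents and the tree is a DAG.
edges["vehicles"].add("horse")

def reachable(src, graph):
    """All nodes reachable from src by depth-first search."""
    seen, stack = set(), [src]
    while stack:
        node = stack.pop()
        for nxt in graph[node]:
            if nxt not in seen:
                seen.add(nxt)
                stack.append(nxt)
    return seen

print("horse" in reachable("vehicles", edges))  # True: recovery route
print("horse" in reachable("animals", edges))   # True: original route
```

    The tunability follows from the same picture: each added edge buys accuracy (more recovery routes) at the cost of evaluating more modules per input.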

    Irrelevant Pixels are Everywhere: Find and Exclude Them for More Efficient Computer Vision

    Computer vision is often performed using Convolutional Neural Networks (CNNs). CNNs are compute-intensive and challenging to deploy on power-constrained systems such as mobile and Internet-of-Things (IoT) devices. CNNs are compute-intensive because they indiscriminately compute many features on all pixels of the input image. We observe that, given a computer vision task, images often contain pixels that are irrelevant to the task. For example, if the task is looking for cars, pixels in the sky are not very useful. Therefore, we propose that a CNN be modified to only operate on relevant pixels to save computation and energy. We propose a method to study three popular computer vision datasets, finding that 48% of pixels are irrelevant. We also propose the focused convolution to modify a CNN's convolutional layers to reject pixels that are marked irrelevant. On an embedded device, we observe no loss in accuracy, while inference latency, energy consumption, and multiply-add count are all reduced by about 45%.
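    The core idea, skipping the multiply-adds at masked-out positions, can be sketched with a toy 2-D convolution. This is a minimal NumPy illustration of masking, not the paper's implementation:

```python
# Toy "focused" 2-D valid convolution: given a binary relevance mask over
# output positions, compute the multiply-adds only where the mask is 1 and
# leave masked-out positions at zero.

import numpy as np

def focused_conv2d(image, kernel, mask):
    kh, kw = kernel.shape
    oh, ow = image.shape[0] - kh + 1, image.shape[1] - kw + 1
    out = np.zeros((oh, ow))
    for i in range(oh):
        for j in range(ow):
            if mask[i, j]:                          # relevant: compute
                out[i, j] = np.sum(image[i:i+kh, j:j+kw] * kernel)
            # irrelevant: skip the multiply-adds entirely
    return out

image = np.arange(16, dtype=float).reshape(4, 4)
kernel = np.ones((2, 2))
mask = np.zeros((3, 3), dtype=bool)
mask[0, 0] = True                                   # one relevant output
out = focused_conv2d(image, kernel, mask)
print(out[0, 0])   # 0 + 1 + 4 + 5 = 10.0
```

    If 48% of output positions are masked off, roughly 48% of the multiply-adds in this loop are never executed, which is the source of the reported savings.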

    Observing Human Mobility Internationally During COVID-19

    This article analyzes visual data captured from five countries and three U.S. states to evaluate the effectiveness of lockdown policies for reducing the spread of COVID-19. The main challenge is the scale: nearly six million images are analyzed to observe how people respond to the policy changes.